
Poster: Vladovlado Date: Dec 3, 2004 6:27am
Forum: petabox Subject: Massive network storage 2

Sorry, my previous post was not very clear.

The best price/reliability, in my experience, comes from using standard 1U boxes. Boxes with 4 built-in SATA drives are easy to find, and dual gigabit Ethernet NICs on the motherboard are standard as well.

Pricing on such systems will be in the $2,000 to $3,000 per TB range, depending on CPU, amount of memory, and bells & whistles. $2,500 is a good estimate, at which point you can buy quality components and RAM. This works out to $2.5M for the whole system.
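The cost arithmetic above can be checked with a quick script (the figures are just the estimates quoted above, not vendor quotes):

```python
# Back-of-the-envelope cost estimate for a 1 PB system built from 1 TB boxes.
COST_PER_TB = 2500        # USD, midpoint of the $2,000-$3,000 range above
SYSTEM_SIZE_TB = 1000     # 1 PB = 1000 TB, one 1U box per TB

total_cost = COST_PER_TB * SYSTEM_SIZE_TB
print(f"Total system cost: ${total_cost:,}")  # -> Total system cost: $2,500,000
```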

Using 240GB HDDs, a single 1U box will hold a terabyte together with 1 or 2 CPUs and sufficient network connectivity to provide 40-80MB/sec at the NFS level. The dual GE ports can be used to set up a redundant network; otherwise a single switch failure would take between 12 and 48 TB out of service.

The combination of 1-2 CPUs, 1TB of disk space, and 1-2 Gbit/sec of bandwidth is inherently balanced for file-access work - the only possible bottleneck remaining in the system is the network. 24-port GE switches cost about as little as 10/100 hardware, so I would recommend using those.

Power consumption will run at 50-60W per TB, so 60KW should be enough for the whole PB system.
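The power budget works out the same way (again just the 50-60 W/TB figure above, scaled to the full system):

```python
# Power budget sketch for the 1 PB system, at 50-60 W per TB.
SYSTEM_SIZE_TB = 1000

for watts_per_tb in (50, 60):
    total_kw = watts_per_tb * SYSTEM_SIZE_TB / 1000
    print(f"{watts_per_tb} W/TB -> {total_kw:.0f} kW total, "
          f"{total_kw * 24:.0f} kWh/day")
# -> 50 W/TB -> 50 kW total, 1200 kWh/day
# -> 60 W/TB -> 60 kW total, 1440 kWh/day
```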

With Tier 1/2 suppliers, the MTBF of a single box will be about 365 days; at 1 TB per box, a PB system will see an average of about 3 box failures every 24 hours. You need something that monitors their health, identifies the failed boxes, and emails the admin.
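A minimal monitor along those lines can be sketched in a few lines of Python. The hostnames, admin address, and SMTP relay here are all hypothetical placeholders; the check simply tries to open a TCP connection (SSH port) to each box and mails the list of non-responders:

```python
# Minimal health-monitor sketch: probe every box, mail the failures.
# With a 365-day MTBF, 1000 boxes average ~2.7 failures/day, so the
# failure list will rarely be empty.
import smtplib
import socket
from email.message import EmailMessage

BOXES = [f"box{n:04d}.storage.example.org" for n in range(1000)]  # hypothetical
ADMIN = "admin@example.org"                                       # hypothetical

def is_alive(host, port=22, timeout=3.0):
    """Return True if the box accepts a TCP connection on the given port."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def check_and_report():
    """Probe all boxes; email the admin a list of any that are down."""
    failed = [box for box in BOXES if not is_alive(box)]
    if failed:
        msg = EmailMessage()
        msg["Subject"] = f"{len(failed)} storage boxes down"
        msg["From"] = "monitor@example.org"   # hypothetical
        msg["To"] = ADMIN
        msg.set_content("\n".join(failed))
        with smtplib.SMTP("localhost") as smtp:  # assumes a local MTA
            smtp.send_message(msg)
    return failed
```

In practice you would run this from cron every few minutes and add per-disk checks (SMART, NFS export reachability) on top of the basic liveness probe.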

The only drawback of such a system is density: using the standard 1U form factor yields about 40TB per rack once you leave room for switches, which is less than your design goal of 100TB per rack. If you can tolerate the lower density, the price, performance, power consumption, and airflow will be very well balanced.

Finally, on the software side it may make sense to take a look at clustered file systems such as Lustre. The primary value of such a system is that it will: (a) manage the distribution of data across disks in a load-balanced way, which is not an easy task at these system sizes; (b) handle mirroring and striping to ensure uninterrupted operation when one or more boxes fail; and (c) scale performance linearly with system size. The last point is quite important when you move data in and out of the system - even over gigabit Ethernet, a single-stream file copy will only move about 140GB per hour, so it will take roughly 7 hours to move a terabyte from one place to another.
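That transfer-time figure follows directly from the per-box NFS throughput quoted earlier (assuming ~40 MB/s sustained, the low end of the 40-80 MB/s range):

```python
# Single-stream copy time over gigabit Ethernet at ~40 MB/s sustained NFS
# throughput (the low end of the 40-80 MB/s figure above).
MB_PER_SEC = 40

gb_per_hour = MB_PER_SEC * 3600 / 1000   # ~144 GB/hour
hours_per_tb = 1000 / gb_per_hour        # ~7 hours per TB
print(f"{gb_per_hour:.0f} GB/hour -> {hours_per_tb:.1f} hours per TB")
```

This is also why linear scaling matters: moving a full petabyte through a single stream would take the better part of a year, while parallel streams across all boxes cut that to roughly the single-box time.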

Hope this helps.